6 research outputs found

    A study of effective evaluation models and practices for technology supported physical learning spaces (JELS)

    The aim of the JELS project was to identify and review the tools, methods and frameworks used to evaluate technology-supported or enhanced physical learning spaces. A key objective was to develop the sector knowledge base on innovation and emerging practice in the evaluation of learning spaces, identifying innovative methods and approaches beyond the traditional post-occupancy evaluations and surveys that have dominated this area to date. The intention was that the frameworks and guidelines discovered or developed through this study could inform all stages of the process of implementing a technology-supported physical learning space. The study was primarily targeted at the UK HE sector, and the FE sector where appropriate, and ran from September 2008 to March 2009.

    Doing learning space evaluations

    In this chapter we argue that evaluating learning spaces is a valuable activity: it can generate operational insights into how physical space affects learning, and can thus feed into processes of learning space design. The broader context is a desire to improve learning by designing better spaces within post-compulsory education.

    Evaluating Learning Spaces Study Final Report

    Executive Summary

    The aim of the JELS project was to identify and review the tools, methods and frameworks used to evaluate technology-supported or enhanced physical learning spaces. A key objective was to develop the sector knowledge base on innovation and emerging practice in the evaluation of learning spaces, identifying innovative methods and approaches beyond the traditional post-occupancy evaluations and surveys that have dominated this area to date. The intention was that the frameworks and guidelines discovered or developed through this study could inform all stages of the process of implementing a technology-supported physical learning space. The study was primarily targeted at the UK HE sector, and the FE sector where appropriate, and ran from September 2008 to March 2009.

    Our initial investigations showed that although institutions were keen to advertise new or innovative learning spaces, the practice of evaluating such spaces was not made readily visible and was thus harder to identify or track. A key finding to emerge from the study was that where evaluations were undertaken, they occurred as part of an internal institutional process, typically prompted by a student satisfaction survey, and their outputs were not ordinarily deemed to be for external consumption. This has limited the extent to which knowledge about learning spaces has been shared across the wider educational community.

    In the main, the study found few new methods or technologies being used for evaluation purposes, with only 20% of the evaluators interviewed reporting the use of Web 2.0 or multimedia technologies in their evaluations. Even though the need to evaluate the teaching and learning within a space was recognised by most institutions, this tended not to be the main driver for evaluation. The strongest driver of (internal) evaluations was the National Student Survey. The ability of existing post-occupancy/student satisfaction surveys to address this driver appeared to be the main reason why more extensive and innovative evaluation methods were not being developed. There was also a desire to ensure that institutional space was being used in line with design ambitions and that occupancy/footfall had increased. Fewer than a third of the evaluations studied made use of any sort of baseline data, limiting the extent to which impact could be fully assessed. A tension also existed between evaluation studies and research into student learning: an evaluation that proposed to go beyond the 'student experience' might be seen as a research activity and so not warranting central institutional support.

    There were some exceptions to these broad findings, one of which was the framework and methods being used at The University of Sheffield by Professor Philippa Levy and her colleagues in CILASS (Centre for Inquiry-based Learning in the Arts and Social Sciences). These are outlined later in this report (p16).

    In summary, the study identified a need for the educational sector as a whole to reconsider how physical learning spaces are evaluated, so as to more clearly assess how well they satisfy design intentions and teaching and learning needs. As a step towards addressing this issue, we have proposed a conceptual Framework for Evaluating Learning Spaces (FELS), built upon input gathered through the interviews and at the project workshops. The framework is intended to offer a common vocabulary for evaluation, based around the interplay of five key factors: intentions, context, practice, designs and procedures. Broadly, it prompts the following questions: Why is the evaluation taking place? What is being evaluated? How will the evaluation be constructed? The framework may be used to identify patterns within current evaluation studies, as well as serving as a checklist to prompt new and more insightful evaluations in the future.

    The framework needs to be extended, tested and validated with 'live' cases in order to prove its utility. We welcome reviews of and comments on the framework as a basis for enabling new and more innovative evaluations of learning spaces in the future.